76 research outputs found
A framework for generalized group testing with inhibitors and its potential application in neuroscience
The main goal of group testing with inhibitors (GTI) is to efficiently
identify a small number of defective items and inhibitor items in a large set
of items. A test on a subset of items is positive if the subset satisfies some
specific properties. Inhibitor items cancel the effects of defective items,
which often makes the outcome of a test containing defective items negative.
Different GTI models can be formulated by considering how these specific
properties produce different cancellation effects. This work introduces generalized GTI
(GGTI), in which a new type of item is added, namely hybrid items. A hybrid item
plays the roles of both a defective item and an inhibitor item. Since the number
of instances of GGTI is large (more than 7 million), we introduce a framework
for classifying all types of items non-adaptively, i.e., all tests are designed
in advance. We then explain how GGTI can be used to classify neurons in
neuroscience. Finally, we show how to realize our proposed scheme in practice.
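As a rough illustration of the cancellation effect described above, the following Python sketch implements one plausible test rule, assuming the classical inhibitor model in which a test is positive iff it contains at least one defective item and no inhibitor items. The `Item` type and the self-cancelling treatment of hybrid items are illustrative assumptions, not the paper's exact formulation.

```python
from enum import Enum

class Item(Enum):
    NORMAL = 0
    DEFECTIVE = 1
    INHIBITOR = 2
    HYBRID = 3  # assumed to act as both a defective and an inhibitor

def test_outcome(subset):
    """Classical GTI rule: positive iff >= 1 defective and 0 inhibitors."""
    has_defective = any(x in (Item.DEFECTIVE, Item.HYBRID) for x in subset)
    has_inhibitor = any(x in (Item.INHIBITOR, Item.HYBRID) for x in subset)
    return has_defective and not has_inhibitor

print(test_outcome([Item.DEFECTIVE, Item.NORMAL]))     # True: defective, no inhibitor
print(test_outcome([Item.DEFECTIVE, Item.INHIBITOR]))  # False: inhibitor cancels
print(test_outcome([Item.HYBRID]))                     # False: hybrid cancels itself
```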
Efficiently Decodable Non-Adaptive Threshold Group Testing
We consider non-adaptive threshold group testing for identification of up to
$d$ defective items in a set of $n$ items, where a test is positive if it
contains at least $u$ defective items, and negative otherwise.
The defective items can be identified using $t = O\left( \left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} \left(u \log{\frac{d}{u}} + \log{\frac{1}{\epsilon}}\right) \cdot d^2 \log{n} \right)$ tests with
probability at least $1 - \epsilon$ for any $\epsilon > 0$, or $t = O\left( \left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} d^3 \log{n} \cdot \log{\frac{n}{d}} \right)$ tests with probability 1. The decoding time is
$t \times \mathrm{poly}(d^2 \log{n})$. This result significantly improves the
best known results for decoding non-adaptive threshold group testing:
$O(n \log{n} + n \log{\frac{1}{\epsilon}})$ for probabilistic decoding, where
$\epsilon > 0$, and $O(n^u \log{n})$ for deterministic decoding.
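For concreteness, here is a minimal sketch of the threshold test rule stated above: a test is positive iff the pool contains at least $u$ defective items. The gap-free setting and all names are illustrative assumptions.

```python
def threshold_test(pool, defectives, u):
    """Positive (True) iff `pool` contains at least `u` defective items."""
    return len(set(pool) & set(defectives)) >= u

# Example with n = 10 items, defective set {2, 5, 7}, threshold u = 2.
defectives = {2, 5, 7}
print(threshold_test({1, 2, 3, 5}, defectives, u=2))  # True: contains 2 and 5
print(threshold_test({1, 2, 3, 4}, defectives, u=2))  # False: contains only 2
```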
On the Transferability of Adversarial Examples between Encrypted Models
Deep neural networks (DNNs) are well known to be vulnerable to adversarial
examples (AEs). In addition, AEs have adversarial transferability, namely, AEs
generated for a source model fool other (target) models. In this paper, we
investigate, for the first time, the transferability of AEs between models
encrypted for adversarially robust defense. To objectively verify the property of
transferability, the robustness of models is evaluated by using a benchmark
attack method called AutoAttack. In an image-classification experiment,
encrypted models are confirmed not only to be robust against AEs but also to
reduce the influence of AEs in terms of transferability between models.
Comment: to appear in ISPACS 202
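The following PyTorch sketch shows the standard way such transferability is measured: craft AEs against a source model and check whether they also fool a target model. FGSM is used here as a simple stand-in for the AutoAttack benchmark used in the paper, and `source`/`target` are hypothetical classifiers (e.g., plain vs. encrypted).

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step L-inf attack crafted against `model`."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def transfer_rate(source, target, x, y, eps=8 / 255):
    """Return (clean, adversarial) accuracy of `target`; a large drop on
    AEs crafted for `source` indicates high transferability."""
    x_adv = fgsm(source, x, y, eps)
    return accuracy(target, x, y), accuracy(target, x_adv, y)
```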
Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings
In this paper, we propose a new key-based defense focusing on both efficiency
and robustness. Although the previous key-based defense seems effective in
defending against adversarial examples, carefully designed adaptive attacks can
bypass it, and it is difficult to train on large datasets such as ImageNet.
We build upon the previous defense with two
major improvements: (1) efficient training and (2) optional randomization. The
proposed defense utilizes one or more secret patch embeddings and classifier
heads with a pre-trained isotropic network. When more than one secret
embedding is used, the proposed defense enables randomization at inference.
Experiments were carried out on the ImageNet dataset, and the proposed defense
was evaluated against an arsenal of state-of-the-art attacks, including
adaptive ones. The results show that the proposed defense achieves high
robust accuracy and clean accuracy comparable to that of the previous
key-based defense.
Comment: To appear in APSIPA ASC 202
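The sketch below illustrates the general idea of multiple secret patch embeddings with randomized selection at inference, assuming a ViT-style pipeline. All shapes, the `MultiKeyDefense` name, the token-to-token backbone interface, and the per-call selection rule are illustrative assumptions, not the paper's exact design.

```python
import random
import torch.nn as nn

class MultiKeyDefense(nn.Module):
    """One secret patch embedding + classifier head per key, sharing a
    pre-trained isotropic backbone; one pair is picked at random per call."""

    def __init__(self, backbone, num_keys=4, patch=16, dim=768, num_classes=1000):
        super().__init__()
        self.backbone = backbone  # assumed to map (B, N, dim) -> (B, N, dim)
        # Each Conv2d plays the role of a secret patch embedding; a real key
        # would determine its weights, plain init is used here for brevity.
        self.embeds = nn.ModuleList(
            nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            for _ in range(num_keys))
        self.heads = nn.ModuleList(
            nn.Linear(dim, num_classes) for _ in range(num_keys))

    def forward(self, x):                       # x: (B, 3, H, W)
        i = random.randrange(len(self.embeds))  # optional randomization
        tokens = self.embeds[i](x).flatten(2).transpose(1, 2)  # (B, N, dim)
        feats = self.backbone(tokens).mean(dim=1)              # pool tokens
        return self.heads[i](feats)
```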
- …